Wireless traffic prediction based on federated learning
Shangjing LIN, Ji MA, Bei ZHUANG, Yueying LI, Ziyi LI, Tie LI, Jin TIAN
Journal of Computer Applications    2023, 43 (6): 1900-1909.   DOI: 10.11772/j.issn.1001-9081.2022050721

Wireless communication network traffic prediction is of great significance to operators in network construction, base station wireless resource management and user experience improvement. However, existing centralized algorithm models face problems of complexity and timeliness, making it difficult to meet traffic prediction requirements at whole-city scale. Therefore, a distributed wireless traffic prediction framework under cloud-edge collaboration was proposed to realize traffic prediction based on a single grid base station with low complexity and communication overhead. On this distributed architecture, a wireless traffic prediction model based on federated learning was proposed: the traffic prediction model of each grid was trained synchronously, JS (Jensen-Shannon) divergence was used by the central cloud server to select grid traffic models with similar traffic distributions, and the Federated Averaging (FedAvg) algorithm was used to fuse the parameters of these models, so as to improve model generalization while describing regional traffic accurately. In addition, since traffic features are highly differentiated across areas within a city, a federated training method based on coalitional games was further proposed: the grids were taken as participants in the coalitional game and screened according to super-additivity criteria, and the core of the coalitional game and the Shapley value were introduced for profit distribution to ensure the stability of the coalition, thereby improving the prediction accuracy of the model.
Experimental results show that, taking Short Message Service (SMS) traffic as an example, compared with grid-independent training, the proposed model reduces the prediction error most significantly in the suburbs, by 26.1% to 28.7%; the reduction is 0.7% to 3.4% in the urban area and 0.8% to 4.7% in the downtown area. Compared with grid-centralized training, the proposed model reduces the prediction error in all three regions by 49.8% to 79.1%.
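The two fusion steps described above — measuring traffic-distribution similarity with JS divergence and averaging the parameters of similar grid models with FedAvg — can be sketched as follows (a minimal NumPy sketch; the model architecture and the similarity threshold are not specified in the abstract and are assumptions here):

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete traffic distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a[a > 0] * np.log(a[a > 0] / b[a > 0]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def fedavg(params, sizes):
    """Federated Averaging: fuse grid-model parameters weighted by sample count."""
    sizes = np.asarray(sizes, float)
    return sum((n / sizes.sum()) * w for n, w in zip(sizes, params))
```

The cloud server would compute `js_divergence` pairwise over the grids' empirical traffic histograms and call `fedavg` only over groups whose divergence falls below a chosen threshold.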

Monocular depth estimation method based on pyramid split attention network
Wenju LI, Mengying LI, Liu CUI, Wanghui CHU, Yi ZHANG, Hui GAO
Journal of Computer Applications    2023, 43 (6): 1736-1742.   DOI: 10.11772/j.issn.1001-9081.2022060852

Aiming at the problem of inaccurate prediction of edges and the farthest regions in monocular image depth estimation, a monocular depth estimation method based on a Pyramid Split attention Network (PS-Net) was proposed. Firstly, building on the Boundary-induced and Scene-aggregated Network (BS-Net), a Pyramid Split Attention (PSA) module was introduced in PS-Net to process the spatial information of multi-scale features and effectively establish long-term dependence among multi-scale channel attentions, thereby extracting boundaries with sharply changing depth gradients as well as the farthest region. Then, the Mish function was used as the activation function in the decoder to further improve the performance of the network. Finally, training and evaluation were performed on the NYUD v2 (New York University Depth dataset v2) and iBims-1 (independent Benchmark images and matched scans v1) datasets. Experimental results on the iBims-1 dataset show that the proposed network reduces the Directed Depth Error (DDE) by 1.42 percentage points compared with BS-Net, with the proportion of correctly predicted depth pixels reaching 81.69%, which proves that the proposed network achieves high accuracy in depth prediction.
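The Mish activation mentioned above has a standard closed form, x·tanh(softplus(x)); a one-line NumPy version for reference:

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)).
    Smooth and non-monotonic, unlike ReLU, which helps gradient flow in decoders."""
    return x * np.tanh(np.log1p(np.exp(x)))
```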

Feature selection method based on self-adaptive hybrid particle swarm optimization for software defect prediction
Zhenhua YU, Zhengqi LIU, Ying LIU, Cheng GUO
Journal of Computer Applications    2023, 43 (4): 1206-1213.   DOI: 10.11772/j.issn.1001-9081.2022030444

Feature selection is a key step in data preprocessing for software defect prediction. Aiming at the problems of existing feature selection methods, such as insignificant dimension reduction and low classification accuracy of the selected optimal feature subset, a feature selection method for software defect prediction based on Self-adaptive Hybrid Particle Swarm Optimization (SHPSO) was proposed. Firstly, combined with population partition, a self-adaptive weight update strategy based on Q-learning was designed, in which Q-learning was introduced to adaptively adjust the inertia weight according to the states of the particles. Secondly, to balance the global search ability in the early stage of the algorithm and the convergence speed in the later stage, time-varying learning factors based on curve adaptivity were proposed. Finally, a hybrid position update strategy was adopted to help particles jump out of local optima as soon as possible and increase the diversity of particles. Experiments were carried out on 12 public software defect datasets. The results show that the proposed method can effectively improve the classification accuracy of the software defect prediction model and reduce the dimension of the feature space, compared with the method using all features, commonly used traditional feature selection methods, and mainstream feature selection methods based on intelligent optimization algorithms. Compared with the Improved Salp Swarm Algorithm (ISSA), the proposed method increases classification accuracy by about 1.60% on average and reduces the feature subset size by about 63.79% on average. Experimental results show that the proposed method can select a feature subset with high classification accuracy and small size.
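The time-varying learning factors can be illustrated with a common scheme in which the cognitive factor c1 decays while the social factor c2 grows over the run, wrapped around the standard PSO velocity update (the paper's exact curve and the Q-learning weight schedule are not given in the abstract, so the cosine curve and the bounds below are assumptions):

```python
import numpy as np

def learning_factors(t, T, c_init=2.5, c_final=0.5):
    """Hypothetical curve-adaptive learning factors: c1 decays from c_init to
    c_final along a cosine curve while c2 mirrors it, favouring exploration
    early and convergence late."""
    frac = 0.5 * (1.0 + np.cos(np.pi * t / T))   # 1 -> 0 over the run
    c1 = c_final + (c_init - c_final) * frac
    c2 = c_init + c_final - c1                   # mirrored growth
    return c1, c2

def velocity_update(v, x, pbest, gbest, w, c1, c2, rng):
    """Standard PSO velocity update used around the adaptive parameters."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```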

Image inpainting algorithm of multi-scale generative adversarial network based on multi-feature fusion
Gang CHEN, Yongwei LIAO, Zhenguo YANG, Wenying LIU
Journal of Computer Applications    2023, 43 (2): 536-544.   DOI: 10.11772/j.issn.1001-9081.2022010015

Aiming at the problems of the Multi-scale Generative Adversarial Networks Image Inpainting algorithm (MGANII), such as unstable training, poor structural consistency, and insufficient details and textures of the inpainted image, an image inpainting algorithm using a multi-scale generative adversarial network based on multi-feature fusion was proposed. Firstly, aiming at the problems of poor structural consistency and insufficient details and textures, a Multi-Feature Fusion Module (MFFM) was introduced into the traditional generator, and a perception-based feature reconstruction loss function was introduced to improve the feature extraction ability of the dilated convolutional network, thereby supplying more details and texture features for the inpainted image. Then, a perception-based feature matching loss function was introduced into the local discriminator to enhance its discrimination ability, thereby improving the structural consistency of the inpainted image. Finally, a risk penalty term was introduced into the adversarial loss function to meet the Lipschitz continuity condition, so that the network converges rapidly and stably during training. On the CelebA dataset, the proposed multi-feature fusion image inpainting algorithm converges faster than MGANII. Meanwhile, the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) of the images inpainted by the proposed algorithm are improved by 0.45% to 8.67% and 0.88% to 8.06% respectively compared with those of the baseline algorithms, and the Frechet Inception Distance score (FID) is reduced by 36.01% to 46.97% compared with that of the baseline algorithms. Experimental results show that the inpainting performance of the proposed algorithm is better than that of the baseline algorithms.
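A risk penalty that enforces a Lipschitz condition on the discriminator is, in spirit, the gradient penalty of WGAN-GP; a NumPy sketch on interpolated samples (assuming the usual (‖∇D‖−1)² form, which the abstract does not spell out):

```python
import numpy as np

def gradient_penalty(disc_grad, real, fake, rng, lam=10.0):
    """Penalize the discriminator's gradient norm away from 1 at random
    interpolates between real and inpainted samples (WGAN-GP style)."""
    eps = rng.random((real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake   # random points on each segment
    grads = disc_grad(x_hat)                  # dD/dx evaluated at x_hat
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)
```

For a linear critic D(x) = w·x the gradient is constant, so a unit-norm w yields zero penalty; in practice `disc_grad` comes from the framework's autograd.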

Fast sanitization algorithm based on BCU-Tree and dictionary for high-utility mining
Chunyong YIN, Ying LI
Journal of Computer Applications    2023, 43 (2): 413-422.   DOI: 10.11772/j.issn.1001-9081.2021122161

Privacy Preserving Utility Mining (PPUM) suffers from long sanitization time, high computational complexity, and high side effects. To solve these problems, a fast sanitization algorithm based on BCU-Tree and Dictionary (BCUTD) for high-utility mining was proposed. In the algorithm, a new tree structure called BCU-Tree was presented to store sensitive item information, and a coding model based on bitwise operators reduced the tree construction time and search space. A dictionary table was used to store all nodes in the tree structure, so that only the dictionary table needed to be accessed when a sensitive item was modified, after which the sanitization process was complete. In experiments on four different datasets, the BCUTD algorithm outperformed Hiding High Utility Item First (HHUIF), Maximum Sensitive Utility-MAximum item Utility (MSU-MAU), and Fast Perturbation Using Tree and Table structures (FPUTT) in terms of sanitization time and side effects. Experimental results show that BCUTD can effectively speed up the sanitization process and reduce the side effects and computational complexity of the algorithm.

Single image super-resolution method based on residual shrinkage network in real complex scenes
Ying LI, Chao HUANG, Chengdong SUN, Yong XU
Journal of Computer Applications    2023, 43 (12): 3903-3910.   DOI: 10.11772/j.issn.1001-9081.2022111697

There are very few paired high- and low-resolution images in the real world. Traditional single image Super-Resolution (SR) methods typically use pairs of high-resolution and low-resolution images to train models, obtaining the training set by synthesizing datasets in which bilinear downsampling is the only image degradation considered. However, the image degradation process in the real world is complex and diverse, and traditional image super-resolution methods have poor reconstruction performance when facing real images with unknown degradation. Aiming at these problems, a single image super-resolution method was proposed for real complex scenes. Firstly, high- and low-resolution images of various scenes were captured by cameras with different focal lengths and registered as image pairs to form a dataset named CSR (Camera Super-Resolution dataset). Secondly, to simulate the real-world image degradation process as closely as possible, the image degradation model was improved by randomizing the parameters of the degradation factors and combining the degradations nonlinearly; the high- and low-resolution image pairs and the image degradation model were then combined to synthesize the training set. Finally, as degradation factors were considered in the dataset, a residual shrinkage network and U-Net were embedded into the benchmark model to reduce the redundant information caused by degradation factors in the feature space as much as possible. Experimental results indicate that, under complex degradation conditions, the proposed method improves PSNR by 0.7 dB and 0.14 dB and SSIM by 0.001 and 0.031 on the RealSR and CSR test sets respectively, compared with the BSRGAN (Blind Super-Resolution Generative Adversarial Network) method. The proposed method achieves better objective indicators and visual effects than existing methods on complex degradation datasets.
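A degradation model with randomized parameters can be sketched as a small pipeline of blur, downsampling and noise, each drawn at random per training sample (the specific factor set, parameter ranges, and nonlinear combination rule below are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def degrade(img, rng):
    """Randomized degradation sketch: box blur -> downsample -> noise,
    with parameters drawn at random for every call."""
    k = int(rng.integers(1, 3))                 # random blur radius (1 or 2)
    pad = np.pad(img, k, mode='edge')
    h, w = img.shape
    blurred = np.zeros((h, w))
    for dy in range(2 * k + 1):                 # mean filter over (2k+1)^2 window
        for dx in range(2 * k + 1):
            blurred += pad[dy:dy + h, dx:dx + w]
    blurred /= (2 * k + 1) ** 2
    s = int(rng.choice([2, 4]))                 # random downsampling scale
    low = blurred[::s, ::s]
    sigma = rng.uniform(1.0, 5.0)               # random noise level
    return np.clip(low + rng.normal(0.0, sigma, low.shape), 0.0, 255.0)
```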

Test suite selection method based on commit prioritization and prediction model
Meiying LIU, Qiuhui YANG, Xiao WANG, Chuang CAI
Journal of Computer Applications    2022, 42 (8): 2534-2539.   DOI: 10.11772/j.issn.1001-9081.2021061016

In order to reduce the regression test set and improve the efficiency of regression testing in the Continuous Integration (CI) environment, a regression test suite selection method for the CI environment was proposed. First, the commits were prioritized based on the historical failure rate and execution rate of each test suite related to each commit. Then, a machine learning method was used to predict the failure rate of the test suites involved in each commit, and the test suites with higher failure rates were selected. In this method, commit prioritization and test suite selection were combined to increase the failure detection rate while reducing the test cost. Experimental results on Google's open-source dataset show that, compared to methods using the same commit prioritization method and test suite selection method, the proposed method achieves the highest improvement in the Average Percentage of Faults Detected per cost (APFDc), by 1% to 27%; at the same cost of test time, its TestRecall increases by 33.33 to 38.16 percentage points, its ChangeRecall increases by 15.67 to 24.52 percentage points, and its test suite SelectionRate decreases by about 6 percentage points.
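The two stages — ranking commits by the history of their related suites, then keeping only suites whose predicted failure rate is high enough — can be sketched as follows (the scoring function and the `fail_rate`/`exec_rate` field names are illustrative assumptions; the abstract does not give the exact combination rule):

```python
def prioritize_commits(commits):
    """Order commits so those whose related suites failed often and ran
    often in history come first (illustrative product score)."""
    def score(commit):
        suites = commit["suites"]
        fail = sum(s["fail_rate"] for s in suites) / len(suites)
        exe = sum(s["exec_rate"] for s in suites) / len(suites)
        return fail * exe
    return sorted(commits, key=score, reverse=True)

def select_suites(commit, predicted_fail, threshold=0.5):
    """Keep only the suites whose predicted failure rate clears a threshold."""
    return [s for s in commit["suites"] if predicted_fail[s["name"]] >= threshold]
```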

Power data analysis based on financial technical indicators
An YANG, Qun JIANG, Gang SUN, Jie YIN, Ying LIU
Journal of Computer Applications    2022, 42 (3): 904-910.   DOI: 10.11772/j.issn.1001-9081.2021030447

Considering the lack of effective trend feature descriptors in existing methods, financial technical indicators such as the Vertical Horizontal Filter (VHF) and Moving Average Convergence/Divergence (MACD) were introduced into power data analysis, and an anomaly detection algorithm and a load forecasting algorithm using financial technical indicators were proposed. In the anomaly detection algorithm, the thresholds of the financial technical indicators were determined from statistics, and abnormal power consumption behaviors of users were then detected by threshold detection. In the load forecasting algorithm, 14-dimensional daily load features related to the financial technical indicators were extracted, and a Long Short-Term Memory (LSTM) load forecasting model was built. Experimental results on industrial power data of Hangzhou show that the proposed load forecasting algorithm reduces the Mean Absolute Percentage Error (MAPE) to 9.272%, lower than that of the Autoregressive Integrated Moving Average (ARIMA), Prophet and Support Vector Machine (SVM) algorithms by 2.322, 24.175 and 1.310 percentage points respectively. The results show that financial technical indicators can be effectively applied to power data analysis.
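Both indicators have standard definitions, which is what makes them reusable as trend descriptors for load series; a small NumPy sketch (the window lengths are the conventional trading defaults, not values from the paper):

```python
import numpy as np

def vhf(close, n=28):
    """Vertical Horizontal Filter: (max - min) over the sum of absolute
    day-to-day changes in the last n values; near 1 means strong trend."""
    c = np.asarray(close, float)[-n:]
    return (c.max() - c.min()) / np.abs(np.diff(c)).sum()

def ema(x, span):
    """Exponential moving average with the usual 2/(span+1) smoothing."""
    a = 2.0 / (span + 1)
    out = [x[0]]
    for v in x[1:]:
        out.append(a * v + (1 - a) * out[-1])
    return np.array(out)

def macd(close, fast=12, slow=26, signal=9):
    """MACD line (fast EMA minus slow EMA) and its signal line."""
    c = np.asarray(close, float)
    line = ema(c, fast) - ema(c, slow)
    return line, ema(line, signal)
```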

Energy-efficient strategy for disks in RAMCloud
LU Liang YU Jiong YING Changtian WANG Zhengying LIU Jiankuang
Journal of Computer Applications    2014, 34 (9): 2518-2522.   DOI: 10.11772/j.issn.1001-9081.2014.09.2518

The emergence of RAMCloud has improved the user experience of Online Data-Intensive (OLDI) applications, but its energy consumption is higher than that of traditional cloud data centers. An energy-efficient strategy for disks under this architecture was put forward to solve this problem. Firstly, the fitness function and roulette wheel selection of a genetic algorithm were introduced to choose energy-saving disks for persistent data backup; secondly, a reasonable buffer size was used to extend the average continuous idle time of disks, so that some of them could be put into standby during their idle periods. The simulation results show that the proposed strategy saves about 12.69% of energy in a given RAMCloud system with 50 servers. The buffer size affects both the energy-saving effect and data availability, so the two must be weighed against each other.
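Roulette wheel selection as used above is the standard genetic-algorithm operator: each disk is picked with probability proportional to its fitness (here, its energy-saving potential). A minimal sketch:

```python
import numpy as np

def roulette_select(fitness, rng):
    """Roulette-wheel selection: spin a uniform draw along the cumulative
    fitness wheel and return the index of the slot it lands in."""
    f = np.asarray(fitness, float)
    r = rng.uniform(0.0, f.sum())
    return int(np.searchsorted(np.cumsum(f), r, side='right'))
```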

Predicting inconsistent change probability of code clone based on latent Dirichlet allocation model
YI Lili ZHANG Liping WANG Chunhui TU Ying LIU Dongsheng
Journal of Computer Applications    2014, 34 (6): 1788-1791.   DOI: 10.11772/j.issn.1001-9081.2014.06.1788

Programmers' copy, paste and modify activities result in a large number of code clones in software systems, and inconsistent changes of code clones are a main cause of program errors and increased maintenance costs during the evolution of software versions. To solve this problem, a new research method was proposed: the mapping relationships between clone groups were built first, then the topics of lineal clone clusters were extracted using the Latent Dirichlet Allocation (LDA) model, and finally the inconsistent change probability of code clones was predicted. A software system containing eight versions was tested and obvious discrimination was obtained. The experimental results show that the method can effectively predict the probability of inconsistent changes and can be used for evaluating the quality and credibility of software.

Posture recognition method based on Kinect predefined bone
ZHANG Dan CHEN Xingwen ZHAO Shuying LI Jiwei BAI Yu
Journal of Computer Applications    2014, 34 (12): 3441-3445.  

In view of the problems that vision-based posture recognition imposes strict requirements on the environment and has weak anti-interference capacity, a posture recognition method based on predefined bones was proposed. The algorithm detected the human body by combining Kinect multi-scale depth and gradient information, recognized every part of the body with a random forest trained on positive and negative samples, and built the body posture vector. According to the posture category, the optimal separating hyperplane and kernel function were built by using an improved support vector machine to classify postures. The experimental results show that the recognition rate of this scheme is 94.3%, with good real-time performance, strong anti-interference capacity and good robustness.

Fast image stitching algorithm based on improved speeded up robust feature
ZHU Lin WANG Ying LIU Shuyun ZHAO Bo
Journal of Computer Applications    2014, 34 (10): 2944-2947.   DOI: 10.11772/j.issn.1001-9081.2014.10.2944

A fast image stitching algorithm based on improved Speeded Up Robust Feature (SURF) was proposed to overcome the real-time and robustness problems of the original SURF-based stitching algorithms. A machine learning method was adopted to build a binary classifier that identified the critical feature points obtained by SURF and removed the non-critical ones. In addition, the Relief-F algorithm was used to reduce the dimension of the improved SURF descriptor to accomplish image registration. A weighted threshold fusion algorithm was adopted to achieve seamless image stitching. Several experiments verify the real-time performance and robustness of the improved algorithm: both the efficiency of image registration and the speed of image stitching are improved.

Passenger route choice behavior on transit network with real-time information at stops
ZENG Ying LI Jun ZHU Hui
Journal of Computer Applications    2013, 33 (10): 2964-2968.  
Along with the development of intelligent transportation information systems, intelligent public transportation systems are gradually being popularized. Such information systems are designed to provide transit passengers with all kinds of real-time information on network conditions, thereby affecting passengers' travel choice behavior, improving travel convenience and flexibility, and in turn improving the social benefit and service level of the public transit system. Concerning the particularity of the transit network, with the electronic bus stop information of Chengdu as an example, a questionnaire was designed to investigate passengers' route choice behavior and travel intention. Using qualitative and quantitative analysis and random utility theory, route choice models were established based on the Logit model and the mixed Logit model, with the characteristic variables of the options and passengers' personal socio-economic attributes as explanatory variables. Monte Carlo simulation and maximum likelihood estimation were used to estimate the parameters. The results indicate that the differences in route choice behavior resulting from individual preferences can be reasonably interpreted by the mixed Logit model, which helps to better understand the complexity of transit behavior and to guide practical application.
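The multinomial logit core of such route choice models maps each route's systematic utility to a choice probability; a minimal NumPy sketch (the mixed Logit variant additionally draws the utility coefficients randomly per passenger, which is omitted here):

```python
import numpy as np

def logit_probs(utilities):
    """Multinomial logit probability of choosing each route from its
    systematic utility: P_i = exp(V_i) / sum_j exp(V_j)."""
    u = np.asarray(utilities, float)
    e = np.exp(u - u.max())     # shift by the max for numerical stability
    return e / e.sum()
```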
Improved wavelet denoising with dual-threshold and dual-factor function
REN Zhong LIU Ying LIU Guodong HUANG Zhen
Journal of Computer Applications    2013, 33 (09): 2595-2598.   DOI: 10.11772/j.issn.1001-9081.2013.09.2595
Traditional wavelet threshold functions have drawbacks such as discontinuity at the threshold points and large deviation of the estimated wavelet coefficients, so the Gibbs phenomenon and distortion are generated and the Signal-to-Noise Ratio (SNR) of the denoised signal can hardly be improved. To overcome these drawbacks, an improved wavelet threshold function was proposed. Compared with the soft, hard, semi-soft and other threshold functions, this function is not only continuous at the threshold points and more convenient to process, but also compatible with the behavior of the traditional functions, and its practical flexibility is greatly improved by adjusting dual threshold parameters and dual variable factors. To verify the improved function, a series of simulation experiments were performed, and the SNR and Root-Mean-Square Error (RMSE) values of different denoising methods were compared. The experimental results demonstrate that smoothness is greatly enhanced and distortion is reduced: compared with the soft function, the SNR increases by 22.2% and the RMSE decreases by 42.6%.
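For reference, the two classical threshold functions the improved one is compared against, which exhibit exactly the drawbacks named above (the improved dual-threshold, dual-factor function itself is not fully specified in the abstract, so it is not reproduced here):

```python
import numpy as np

def hard_threshold(w, t):
    """Hard threshold: keeps large coefficients unchanged but is
    discontinuous at |w| = t, causing Gibbs-like artifacts."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Soft threshold: continuous, but shrinks every surviving
    coefficient by t, biasing the estimate."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```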
Noise reduction of optimization weight based on energy of wavelet sub-band coefficients
WANG Kai LIU Jiajia YUAN Jianying JIANG Xiaoliang XIONG Ying LI Bailin
Journal of Computer Applications    2013, 33 (08): 2341-2345.  
Concerning the key problem of selecting the threshold function in wavelet threshold denoising, and to address the discontinuity of conventional threshold functions and the large deviation of the estimated wavelet coefficients, a continuous adaptive threshold function over the whole wavelet domain was proposed. It fully considers the characteristics of sub-band coefficients at different scales, and sets the energy of the sub-band coefficients at each scale as the initial weights of the threshold function. Optimal weights are iteratively solved by using the interval advance-and-retreat method and the golden section method, so as to adaptively improve the approximation between the estimated and decomposed wavelet coefficients. The experimental results show that the proposed method can efficiently reduce noise while preserving the edges and details of the image, and achieves higher Peak Signal-to-Noise Ratio (PSNR) under different noise standard deviations.
Parallel Delaunay algorithm design in lunar surface terrain reconstruction system
WANG Zhe GAO Sanhong ZHENG Huiying LI Lichun
Journal of Computer Applications    2013, 33 (08): 2177-2183.  
The triangulation procedure is one of the time bottlenecks of a 3D reconstruction system. To speed it up, a parallel Delaunay algorithm was designed for a shared-memory multi-core computer. The algorithm employed the divide-and-conquer method and improved the conquer procedure and the Delaunay mesh optimization procedure to avoid data competition. Experiments were conducted on datasets of 500,000 to 5,000,000 points gathered on the lunar surface simulation ground, and the speedup of the algorithm reached 6.44. In addition, the algorithm complexity and parallel efficiency were fully analyzed, and the algorithm was applied in the lunar surface terrain reconstruction system to realize fast virtual terrain reconstruction.
Two-hop incentive compatible routing protocol in disruption-tolerant networks
WEN Ding CAI Ying LI Zhuo
Journal of Computer Applications    2013, 33 (06): 1500-1504.   DOI: 10.3724/SP.J.1087.2013.01500
A Two-hop Incentive Compatible (TIC) routing protocol was proposed for Disruption-Tolerant Networks (DTN) to counter the degradation of communication performance caused by selfish nodes. TIC selected the optimal relay node by taking both the encounter probability and the transmission cost into account, and ensured that nodes maximize their profit when they report their encounter probability and transmission cost honestly. At the same time, a signature technique based on bilinear maps was introduced to ensure that the selected relay nodes receive their payment securely, which effectively prevents malicious nodes from tampering with the messages.
Workflow customization technology for collaborative SaaS platform of industrial chains
CAO Shuai WANG Shuying LIU Shuya
Journal of Computer Applications    2013, 33 (05): 1450-1455.   DOI: 10.3724/SP.J.1087.2013.01450
A workflow customization model for a collaborative Software-as-a-Service (SaaS) platform of industrial chains was proposed, based on the mapping between workflows and operations and the customization relationship between enterprise groups and workflows. A drive rule and a load-and-control mode of the workflow model were proposed to provide dynamic loading based on user identities. The proposed method was validated by workflow customization in our after-sales service application for the automobile parts industrial chain, which showed that it meets the needs of workflow customization on the collaborative SaaS platform of industrial chains.
Adaptive Chaos Fruit Fly Optimization Algorithm
HAN Junying LIU Chengzhong
Journal of Computer Applications    2013, 33 (05): 1313-1333.   DOI: 10.3724/SP.J.1087.2013.01313
In order to overcome the problems of low convergence precision and easy relapse into local extrema in the basic Fruit Fly Optimization Algorithm (FOA), an improved FOA called Adaptive Chaos FOA (ACFOA) was proposed by introducing a chaos algorithm into the evolutionary process of the basic FOA. Under the condition of local convergence, the chaos algorithm is applied to search for the global optimum outside the convergent area, so as to jump out of the local extremum and continue optimizing. Experimental results show that the new algorithm has better global searching ability, faster convergence and more precise convergence.
Transit assignment based on stochastic user equilibrium with passengers' perception consideration
ZENG Ying LI Jun ZHU Hui
Journal of Computer Applications    2013, 33 (04): 1149-1152.   DOI: 10.3724/SP.J.1087.2013.01149
Concerning the special nature of the transit network, the concept of a generalized path that can easily describe passenger route choice behavior was put forward, and the key cost of each path was considered. Based on the analytical framework of cumulative prospect theory and passengers' perception, a stochastic user equilibrium assignment model was developed. A simple example reveals that the limitations of the traditional method can be effectively overcome by the proposed method, which improves on the traditional model's basic assumption of complete rationality. It helps enhance the understanding of the complexity of urban public transportation behavior and the rules of decision-making. The facility layout and planning of public transportation, as well as the evaluation of service level, can be determined with this result; it can also serve as valid data support for traffic guidance.
Fruit fly optimization algorithm based on bacterial chemotaxis
HAN Junying LIU Chengzhong
Journal of Computer Applications    2013, 33 (04): 964-966.   DOI: 10.3724/SP.J.1087.2013.00964
Attraction and exclusion operations of bacterial chemotaxis were introduced into the original Fruit Fly Optimization Algorithm (FOA), and an FOA based on Bacterial Chemotaxis (BCFOA) was proposed. Whether to perform exclusion (escaping the worst individual) or attraction (being attracted by the best individual) was decided by judging whether the fitness variance is zero, thus solving the premature convergence caused by the loss of population diversity that results when, as in the original FOA, individuals are only attracted by the best one. The experimental results show that the new algorithm has better global searching ability, and faster and more precise convergence.
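The variance test at the heart of the switch between the two chemotaxis operations is simple to state (a sketch; the paper's tolerance for "zero variance" is an assumption):

```python
import numpy as np

def chemotaxis_mode(fitness, eps=1e-12):
    """If fitness variance has collapsed (premature convergence), perform
    exclusion (flee the worst individual); otherwise keep the attraction
    step toward the best individual."""
    return "exclusion" if np.var(np.asarray(fitness, float)) < eps else "attraction"
```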
Stereo matching algorithm based on fast-converging belief propagation
ZHANG Hongying LIU Yixuan YANG Yu
Journal of Computer Applications    2013, 33 (02): 484-494.   DOI: 10.3724/SP.J.1087.2013.00484
Concerning the high computational complexity and low efficiency of the traditional stereo matching method based on belief propagation, a fast-converging algorithm was proposed. When calculating the confidence level of each pixel, the algorithm only utilizes the information propagated from the neighboring pixels within an adaptive support window, ignoring the impact of pixels beyond the window. The experimental results show that the proposed algorithm reduces computation time by 40% to 50% while maintaining the matching accuracy, and thus meets the real-time requirement of stereo matching.
Shadow removal algorithm based on Gaussian mixture model
ZHANG Hongying LI Hong SUN Yigang
Journal of Computer Applications    2013, 33 (01): 31-34.   DOI: 10.3724/SP.J.1087.2013.00031
Shadow removal is one of the most important parts of moving object detection in the field of intelligent video, since shadows directly affect the recognition results. In view of the disadvantages of shadow removal methods utilizing texture, a new algorithm based on the Gaussian Mixture Model (GMM) and the YCbCr color space was proposed. Firstly, moving regions were detected using GMM. Secondly, the Gaussian mixture shadow model was built by analyzing the color statistics of the difference between the foreground and background of the moving regions in YCbCr color space. Lastly, the threshold value of the shadow was obtained according to the Gaussian probability distribution in YCbCr color space. In the experiments, more than 70 percent of the shadow pixels in the image sequences were detected accurately by the algorithm. The experimental results show that the proposed algorithm is efficient and robust in object extraction and shadow detection under different scenes.
Improved fuzzy C-means clustering algorithm based on distance correction
LOU Xiao-jun, LI Jun-ying, LIU Hai-tao
Journal of Computer Applications    2012, 32 (03): 646-648.   DOI: 10.3724/SP.J.1087.2012.00646
Abstract1291)      PDF (446KB)(600)       Save
Based on Euclidean distance, the classic Fuzzy C-Means (FCM) clustering algorithm tends to partition data sets into clusters of equal size, and its accuracy drops when the distribution of data points is not spherical. To solve these problems, a distance correction factor based on point density was introduced, a distance matrix incorporating this factor was built to measure the differences between data points, and the new matrix was applied to modify the classic FCM algorithm. Two sets of experiments on artificial data and UCI data were conducted, and the results show that the proposed algorithm suits non-spherical data sets and outperforms the classic FCM algorithm in clustering accuracy.
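A minimal sketch of density-corrected FCM is given below. The density proxy (a fixed-radius neighbour count) and the square-root correction factor are assumptions chosen for illustration; the paper's exact correction factor may differ:

```python
import math

def point_density(data, i, radius=1.5):
    """Count of points within `radius` of data[i]; a simple density proxy."""
    return sum(1 for x in data if math.dist(x, data[i]) <= radius)

def corrected_dist(x, center, dens, mean_dens):
    """Euclidean distance scaled by a density-based correction, so that
    points in dense regions weigh more in the partition."""
    return math.dist(x, center) * (mean_dens / dens) ** 0.5

def fcm(data, c=2, m=2.0, iters=30):
    """FCM with the corrected distance; deterministic spread-out init."""
    n = len(data)
    dens = [point_density(data, i) for i in range(n)]
    mean_dens = sum(dens) / n
    centers = [data[round(j * (n - 1) / max(c - 1, 1))] for j in range(c)]
    u = [[0.0] * c for _ in range(n)]
    for _ in range(iters):
        # membership update
        for i, x in enumerate(data):
            d = [max(corrected_dist(x, v, dens[i], mean_dens), 1e-9)
                 for v in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                    for k in range(c))
        # center update (weighted means of the data)
        centers = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            s = sum(w)
            centers.append(tuple(sum(w[i] * data[i][t] for i in range(n)) / s
                                 for t in range(len(data[0]))))
    return centers, u
```

On two well-separated point clouds the centers converge near the cluster means, with the correction factor leaving uniformly dense data unchanged.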
Novel image block encryption algorithm based on spatiotemporal chaos system
ZHENG Hong-ying, LI Wen-jie, XIAO Di
Journal of Computer Applications    2011, 31 (11): 3053-3055.   DOI: 10.3724/SP.J.1087.2011.03053
Abstract1281)      PDF (471KB)(427)       Save
A novel image block encryption algorithm based on the Coupled Map Lattice (CML) system was presented to address the lack of parallelism in common image encryption algorithms. The basic idea is to divide the image into blocks and then use the block number as the spatial parameter of the CML to iterate the chaotic system. The chaotic stream was used to perform confusion operations on part of the image, and the result was applied to encrypt the other part. The proposed method can encrypt images in parallel and also supports color image encryption. Simulations indicate that the algorithm is easy to realize and simultaneously achieves low computational complexity, high sensitivity, high speed and high security.
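The block-indexed CML keystream can be sketched in Python as follows. The logistic map, coupling strength, burn-in length and XOR confusion step are illustrative assumptions standing in for the paper's scheme; each lattice site drives the keystream of one image block, which is what makes the blocks encryptable in parallel:

```python
def logistic(x, mu=3.99):
    return mu * x * (1 - x)

def cml_step(lat, eps=0.1):
    """One iteration of a coupled map lattice with periodic coupling."""
    n = len(lat)
    return [(1 - eps) * logistic(lat[i])
            + eps / 2 * (logistic(lat[(i - 1) % n]) + logistic(lat[(i + 1) % n]))
            for i in range(n)]

def keystream(n_blocks, block_len, key=0.345, eps=0.1, burn=100):
    """Per-block chaotic byte streams: lattice site i drives block i."""
    lat = [(key + i / n_blocks) % 1.0 or 0.1 for i in range(n_blocks)]
    for _ in range(burn):          # discard the transient
        lat = cml_step(lat, eps)
    streams = [[] for _ in range(n_blocks)]
    for _ in range(block_len):
        lat = cml_step(lat, eps)
        for i in range(n_blocks):
            streams[i].append(int(lat[i] * 256) % 256)
    return streams

def encrypt_blocks(blocks, key=0.345):
    """XOR each pixel block with its own chaotic stream (involutive)."""
    ks = keystream(len(blocks), len(blocks[0]), key)
    return [[b ^ k for b, k in zip(blk, s)] for blk, s in zip(blocks, ks)]
```

Because XOR with the same keystream is its own inverse, applying `encrypt_blocks` twice with the same key recovers the plaintext.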
Zero watermark algorithm for binary document images based on texture spectrum
CHEN Xia, WANG Xi-chang, ZHANG Hua-ying, LIU Jiang
Journal of Computer Applications    2011, 31 (09): 2378-2381.   DOI: 10.3724/SP.J.1087.2011.02378
Abstract1435)      PDF (611KB)(423)       Save
Concerning the copyright protection of binary document images, a zero watermark algorithm was proposed. The algorithm constructed a texture image based on the Local Binary Pattern (LBP), and then zero watermark information was derived from the texture spectral histogram of the texture image. Compared with other text-image watermarking methods, this method has better invisibility and leaves the original image unchanged. Watermark attacks including image cropping, noise addition and rotation were tested. The experimental results show that the proposed zero watermark algorithm is robust: these attacks have little impact on the zero watermark information, and the algorithm remains stable with the lowest correlation coefficient above 0.85.
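The LBP texture-spectrum construction can be sketched as follows. The final bit mapping (thresholding histogram bins against their mean) is one plausible construction assumed for illustration; the paper's exact mapping may differ:

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code of pixel (r, c); img is a 2-D list of grey values."""
    center = img[r][c]
    nbrs = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
            img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    return sum((1 << k) for k, v in enumerate(nbrs) if v >= center)

def lbp_histogram(img):
    """Texture spectrum: histogram of LBP codes over all interior pixels."""
    h = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            h[lbp_code(img, r, c)] += 1
    return h

def zero_watermark(img, n_bits=64):
    """Derive watermark bits from the texture spectrum: bit k is 1 if
    bin k exceeds the mean bin count (an assumed mapping)."""
    h = lbp_histogram(img)
    mean = sum(h) / len(h)
    return [1 if h[k] > mean else 0 for k in range(n_bits)]
```

Since the watermark is computed from the image rather than embedded into it, the document pixels are never modified, which is the defining property of a zero watermark.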
Implementation of LU decomposition and Laplace algorithms on GPU
CHEN Ying, LIN Jin-xian, LV Tun
Journal of Computer Applications    2011, 31 (03): 851-855.   DOI: 10.3724/SP.J.1087.2011.00851
Abstract1372)      PDF (736KB)(994)       Save
With the advancement of the Graphics Processing Unit (GPU) and its new programmability features, many algorithms have been successfully ported to the GPU. LU decomposition and Laplace algorithms are at the core of scientific computation, but their computational cost is usually large; therefore, a speedup method was proposed. The implementation was based on NVIDIA GPUs supporting the Compute Unified Device Architecture (CUDA). The algorithms were accelerated by dividing tasks between the CPU and GPU, using shared memory on the GPU to speed up data access, eliminating branches in the GPU program, and stripping the matrix. The experimental results show that as the matrix size increases, the GPU-based algorithm achieves a good speedup over the CPU-based one.
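For reference, the computation being accelerated is ordinary Doolittle LU decomposition; the CPU baseline below is a plain Python sketch (the paper's version maps the row/column update loops to CUDA threads and stages matrix tiles in shared memory):

```python
def lu_decompose(a):
    """Doolittle LU without pivoting: returns (L, U) with A = L * U.
    Each outer iteration computes row i of U, then column i of L -- the
    inner loops are the independent work a GPU version parallelizes."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):                      # row i of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):                  # column i of L
            L[j][i] = (a[j][i]
                       - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

Within one outer iteration, the updates of the different `j` entries are independent of each other, which is why they map naturally onto GPU threads.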
Grid resource distribution based on fuzzy multi-objective decision making
Jian-hong FENG, Ying LIU, Ying LUO, Wen-guang CHEN
Journal of Computer Applications   
Abstract1461)      PDF (601KB)(979)       Save
A new resource distribution strategy based on fuzzy decision making was proposed, and a multi-objective fuzzy decision making model was built to solve the resource distribution problems in grid computing; the implementation of the strategy was also described. Analysis proves that this model not only assigns the most suitable resources to tasks, but also improves the success rate of resource matching and the efficiency of resource utilization.
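One common form of multi-objective fuzzy decision making is sketched below: each objective gets a linear satisfaction (membership) function, the degrees are aggregated with min (fuzzy AND), and the resource with the highest overall degree wins. The linear memberships and min aggregation are generic textbook choices assumed here; the paper's model may weight objectives differently:

```python
def membership(value, worst, best):
    """Linear satisfaction degree in [0, 1]: 1 at `best`, 0 at `worst`."""
    if best == worst:
        return 1.0
    m = (value - worst) / (best - worst)
    return max(0.0, min(1.0, m))

def select_resource(resources, objectives):
    """resources: name -> {objective: value};
    objectives: objective -> (worst, best) reference values.
    Aggregate per-objective satisfaction with min and return the
    resource with the highest overall degree."""
    def score(attrs):
        return min(membership(attrs[o], w, b)
                   for o, (w, b) in objectives.items())
    return max(resources, key=lambda r: score(resources[r]))
```

Using min as the aggregator means a resource is only as good as its weakest objective, which prevents one excellent attribute from masking an unacceptable one.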
Anomaly detection method by clustering normal data
Na-na LI, Zheng ZHAO, Bo-ying LIU, Jun-hua GU
Journal of Computer Applications   
Abstract1875)      PDF (634KB)(1075)       Save
A new anomaly detection method based on positive selection was proposed. The method learns the characteristics of the "self" space by clustering, selects typical samples from every cluster to construct detectors, and then uses positive selection to detect anomalies. The new algorithm is not only effective in applications with a large number of "self" samples, but also avoids the shortcoming of random sample selection in VDetector. Experimental results on the Ring data set and biomedical data show that the new method is more effective for anomaly detection.
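The cluster-then-positively-select pipeline can be sketched as follows; using k-means centroids as the "typical samples" and a fixed detector radius are simplifying assumptions for illustration:

```python
import math

def build_detectors(normal, k=2, iters=20):
    """Cluster the 'self' samples with a tiny k-means and keep one
    representative (centroid) per cluster as the detector set."""
    centers = list(normal[:k])
    dim = len(normal[0])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in normal:
            j = min(range(k), key=lambda jj: math.dist(x, centers[jj]))
            groups[j].append(x)
        centers = [tuple(sum(p[d] for p in g) / len(g) for d in range(dim))
                   if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def is_anomaly(x, detectors, radius):
    """Positive selection: x is 'self' if it lies within `radius` of some
    detector, anomalous otherwise."""
    return all(math.dist(x, d) > radius for d in detectors)
```

Because the detectors are derived from cluster structure rather than drawn at random, every dense region of the "self" space is guaranteed a representative.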
Official document classification using stochastic keyword generation
Ying LIU
Journal of Computer Applications   
Abstract1666)      PDF (859KB)(844)       Save
The design and implementation of a government official document classification system based on topic phrases were presented. The system fully exploits the value of topic phrases in classification preprocessing, and performs feature space transformation and dimension reduction through stochastic keyword generation and Bootstrapping. Unlike traditional text classification preprocessing, this approach improves the performance of the official document classification system; official document classification using stochastic keyword generation outperforms the other methods compared.